Online and stochastic learning have emerged as powerful tools in large-scale optimization. In this work, we generalize the Douglas-Rachford splitting (DRs) method for minimizing composite functions to the online and stochastic settings (to the best of our knowledge, this is the first time DRs has been generalized to a sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the batch DRs method. We then prove that the online DRs splitting method enjoys an $O(1)$ regret bound and that stochastic DRs splitting has a convergence rate of $O(1/\sqrt{T})$. The proofs are simple and intuitive, and the results and techniques can serve as a starting point for research on large-scale machine learning employing the DRs method. Numerical experiments with the proposed methods demonstrate the effectiveness of the online and stochastic update rules, and further confirm our regret and convergence analysis.
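For context, the display below recalls the standard textbook form of the batch Douglas-Rachford splitting iteration for minimizing a composite objective $f(x) + g(x)$; it is given here only as background, not necessarily the exact update rule analyzed in the paper, and the step size $\gamma > 0$ and the per-round substitution in the online setting are stated as assumptions:
$$x^{k+1} = \operatorname{prox}_{\gamma f}\left(z^k\right), \qquad z^{k+1} = z^k + \operatorname{prox}_{\gamma g}\!\left(2x^{k+1} - z^k\right) - x^{k+1},$$
where $\operatorname{prox}_{\gamma h}(v) = \arg\min_x \left\{ h(x) + \tfrac{1}{2\gamma}\|x - v\|^2 \right\}$ denotes the proximal operator. In an online or stochastic variant, one would presumably replace $f$ at round $t$ by the revealed loss $f_t$, so that each round applies one proximal step on the current loss and one on the regularizer.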